Results 1 - 9 of 9
1.
J Biomed Inform ; 130: 104078, 2022 06.
Article in English | MEDLINE | ID: covidwho-1804424

ABSTRACT

Scientific evidence suggests that acoustic analysis could serve as an indicator for diagnosing COVID-19. Analysis of breath sounds recorded on smartphones reveals that patients with COVID-19 exhibit distinct patterns in both the time domain and the frequency domain. These patterns are used in this paper to diagnose COVID-19 infection. Statistics of the sound signals, frequency-domain features, and Mel-Frequency Cepstral Coefficients (MFCCs) are calculated and fed into two classifiers, k-Nearest Neighbors (kNN) and a Convolutional Neural Network (CNN), to determine whether a user has contracted COVID-19. Test results show that an accuracy of over 97% can be achieved with the CNN classifier and more than 85% with kNN using optimized features. Methods for selecting the best features and various metrics for evaluating performance are also demonstrated. Owing to its high accuracy, the CNN model was implemented in an Android app that reports a diagnosis together with a probability indicating the confidence level. Initial medical testing shows results similar to those of the lateral flow method, indicating that the proposed method is feasible and effective. Because the method requires only breath sounds and a smartphone, it can be used by anyone regardless of the availability of other medical resources, making it a potentially powerful tool for diagnosing COVID-19 at scale.
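A minimal sketch of the kind of MFCC-plus-kNN pipeline this abstract describes, assuming librosa and scikit-learn; the file names, labels, feature summary (per-coefficient mean and standard deviation), and k=5 are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np
import librosa  # audio loading and MFCC extraction
from sklearn.neighbors import KNeighborsClassifier

def breath_features(path, sr=16000, n_mfcc=13):
    """Summarize one breath recording as the mean/std of its MFCCs."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Collapse the time axis so every clip yields a fixed-length vector.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labelled data: 1 = COVID-19 positive, 0 = healthy.
train_paths = ["covid_01.wav", "healthy_01.wav"]  # placeholder file names
train_labels = [1, 0]

X = np.stack([breath_features(p) for p in train_paths])
clf = KNeighborsClassifier(n_neighbors=5).fit(X, np.array(train_labels))
# Probability of the positive class, analogous to the app's confidence level.
prob_covid = clf.predict_proba([breath_features("new_breath.wav")])[0, 1]
```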


Subject(s)
Artificial Intelligence , COVID-19 , Acoustics , COVID-19/diagnosis , Humans , Neural Networks, Computer , Respiratory Sounds/diagnosis , Smartphone
2.
IEEE Trans Biomed Eng ; 69(9): 2872-2882, 2022 09.
Article in English | MEDLINE | ID: covidwho-1752445

ABSTRACT

Computational methods for lung sound analysis are beneficial for computer-aided diagnosis support, storage, and monitoring in critical care. In this paper, we use pre-trained ResNet models as backbone architectures for the classification of adventitious lung sounds and respiratory diseases. The learned representation of the pre-trained model is transferred using vanilla fine-tuning, co-tuning, stochastic normalization, and a combination of the co-tuning and stochastic normalization techniques. Furthermore, data augmentation in both the time domain and the time-frequency domain is used to account for the class imbalance of the ICBHI and our multi-channel lung sound datasets. Additionally, we introduce spectrum correction to account for variations in recording-device properties on the ICBHI dataset. Empirically, our proposed systems mostly outperform the state-of-the-art lung sound classification systems for the adventitious lung sounds and respiratory diseases of both datasets.
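Of the four transfer strategies listed, vanilla fine-tuning is the simplest to illustrate. A minimal sketch assuming PyTorch/torchvision, a log-mel-spectrogram data loader (`loader`, hypothetical), and a four-class adventitious-sound task; none of these settings are taken from the paper:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet-pretrained weights: the transferred representation.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
# Replace the classification head for the new task
# (e.g. normal / crackle / wheeze / both -- an assumed label set).
model.fc = nn.Linear(model.fc.in_features, 4)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
model.train()
for spectrograms, labels in loader:  # (B, 3, H, W) log-mel images, hypothetical
    optimizer.zero_grad()
    loss = criterion(model(spectrograms), labels)
    loss.backward()
    optimizer.step()  # all layers update: "vanilla" fine-tuning
```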


Subject(s)
Diagnosis, Computer-Assisted , Respiratory Sounds , Humans , Lung , Respiratory Sounds/diagnosis
3.
PLoS One ; 17(1): e0262448, 2022.
Article in English | MEDLINE | ID: covidwho-1622364

ABSTRACT

This study sought to investigate the feasibility of using smartphone-recorded breathing sounds within a deep learning framework to discriminate between COVID-19 subjects, including asymptomatic ones, and healthy subjects. A total of 480 breathing sounds (240 shallow and 240 deep) were obtained from a publicly available database named Coswara. These sounds were recorded by 120 COVID-19 and 120 healthy subjects via a smartphone microphone through a website application. A deep learning framework is proposed herein that relies on hand-crafted features extracted from the original recordings and from the mel-frequency cepstral coefficients (MFCC), as well as deep-activated features learned by a combination of a convolutional neural network and bidirectional long short-term memory units (CNN-BiLSTM). Statistical analysis of patient profiles showed a significant difference (p-value: 0.041) in ischemic heart disease between COVID-19 and healthy subjects. Analysis of the distribution of the combined MFCC values showed that COVID-19 subjects tended to have a distribution skewed to the right of the zero mean (shallow: 0.59±1.74, deep: 0.65±4.35, p-value: <0.001). In addition, the proposed deep learning approach had an overall discrimination accuracy of 94.58% and 92.08% using shallow and deep recordings, respectively. Furthermore, it detected COVID-19 subjects successfully with a maximum sensitivity of 94.21%, a specificity of 94.96%, and an area under the receiver operating characteristic curve (AUROC) of 0.90. Among the 120 COVID-19 participants, the 18 asymptomatic subjects were detected with 100.00% accuracy using shallow recordings and 88.89% using deep recordings. This study paves the way towards utilizing smartphone-recorded breathing sounds for COVID-19 detection. The observations suggest deep learning on smartphone-recorded breathing sounds as an effective pre-screening tool for COVID-19 alongside the current reverse-transcription polymerase chain reaction (RT-PCR) assay: an early, rapid, easily distributed, time-efficient, and almost no-cost diagnostic technique that complies with social distancing restrictions during the COVID-19 pandemic.
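A minimal sketch of the CNN-BiLSTM topology named in the abstract, assuming PyTorch; the layer sizes, 13 MFCC coefficients, and use of the final timestep for classification are illustrative guesses, not the study's architecture:

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Convolutional front end over MFCC frames, BiLSTM over time."""
    def __init__(self, n_mfcc=13, hidden=64, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mfcc, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # 2x: both directions

    def forward(self, x):             # x: (batch, n_mfcc, time)
        z = self.conv(x)              # (batch, 32, time/2)
        z = z.transpose(1, 2)         # (batch, time/2, 32) for the LSTM
        out, _ = self.lstm(z)
        return self.head(out[:, -1])  # logits from the last timestep

logits = CNNBiLSTM()(torch.randn(8, 13, 100))  # 8 clips, 13 MFCCs, 100 frames
```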


Subject(s)
COVID-19/diagnosis , Mass Screening/instrumentation , Mass Screening/methods , Respiratory Sounds/diagnosis , Adolescent , Adult , Aged , Deep Learning , Female , Humans , Male , Middle Aged , Neural Networks, Computer , Pandemics/prevention & control , ROC Curve , SARS-CoV-2/pathogenicity , Smartphone , Young Adult
4.
Sensors (Basel) ; 21(16)2021 Aug 18.
Article in English | MEDLINE | ID: covidwho-1376960

ABSTRACT

Intelligent systems are transforming the world, as well as our healthcare system. We propose a deep learning-based cough sound classification model that can distinguish healthy coughs from pathological coughs in children, such as those associated with asthma, upper respiratory tract infection (URTI), and lower respiratory tract infection (LRTI). To train the deep neural network, we collected a new dataset of cough sounds labelled with a clinician's diagnosis. The chosen model is a bidirectional long short-term memory (BiLSTM) network based on Mel-Frequency Cepstral Coefficient (MFCC) features. When trained to classify two classes of coughs, healthy or pathological (in general or belonging to a specific respiratory pathology), the model reaches an accuracy exceeding 84% against the label provided by the physicians' diagnosis. To classify a subject's respiratory condition, the results of multiple cough epochs per subject were combined; the resulting prediction accuracy exceeds 91% for all three respiratory pathologies. However, when the model is trained to discriminate among four classes of coughs, overall accuracy drops, because one class of pathological coughs is often misclassified as another. If one only requires that healthy coughs be classified as healthy and pathological coughs as having some kind of pathology, the overall accuracy of the four-class model remains above 84%. A longitudinal study of the MFCC feature space, comparing pathological and recovered coughs collected from the same subjects, revealed that pathological coughs, irrespective of the underlying condition, occupy the same feature space, making them harder to differentiate using MFCC features alone.
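The abstract notes that per-subject predictions combine the results of multiple cough epochs; a plain majority vote is one natural combination rule. A minimal sketch, with the subject IDs and labels invented for illustration (the paper does not specify its aggregation method):

```python
from collections import Counter, defaultdict

def subject_diagnosis(epoch_predictions):
    """Combine per-cough-epoch labels into one per-subject call
    by majority vote (an assumed aggregation rule)."""
    by_subject = defaultdict(list)
    for subject_id, label in epoch_predictions:
        by_subject[subject_id].append(label)
    return {s: Counter(labels).most_common(1)[0][0]
            for s, labels in by_subject.items()}

# Three epochs from one child: two 'asthma' votes outweigh one 'healthy'.
print(subject_diagnosis([("s1", "asthma"), ("s1", "healthy"), ("s1", "asthma")]))
# {'s1': 'asthma'}
```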


Subject(s)
Asthma , Cough , Asthma/diagnosis , Child , Cough/diagnosis , Humans , Longitudinal Studies , Neural Networks, Computer , Respiratory Sounds/diagnosis , Sound
5.
J Biol Phys ; 47(2): 103-115, 2021 06.
Article in English | MEDLINE | ID: covidwho-1202797

ABSTRACT

The paper investigates the feasibility of applying fractal, spectral, and nonlinear time series analyses to lung auscultation. Thirty-five sound signals of bronchial breath (BB) and pulmonary crackle (PC), analysed by fast Fourier and wavelet transforms, not only detail the number, nature, and time of occurrence of the frequency components but also shed light on the underlying airflow during breathing. Fractal dimension, phase portraits, and sample entropy reveal the greater randomness, antipersistence, and complexity of airflow dynamics in BB compared with PC. Principal component analysis of the extracted spectral features categorises BB, fine crackles, and coarse crackles. Supervised classification based on phase-portrait features proves better than the unsupervised machine learning technique. The present work establishes phase-portrait features as the better choice for classification, as they take into account the temporal correlation between the data points of the time series, thereby suggesting a novel surrogate method for diagnosis in pulmonology. The study suggests the possible application of these techniques in the auscultation of coronavirus disease 2019, which seriously affects the respiratory system.
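Sample entropy, one of the nonlinear measures the paper applies, quantifies the irregularity of a time series as -ln(A/B), where B counts pairs of length-m templates within a Chebyshev tolerance r and A counts the same for length m+1. A minimal NumPy sketch, with m=2 and r = 0.2·std as conventional defaults rather than the paper's parameters:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn = -ln(A/B): A, B = matching template pairs of length m+1, m."""
    x = np.asarray(x, dtype=float)
    r = 0.2 * x.std() if r is None else r
    n = len(x) - m  # same number of templates for both lengths

    def count_pairs(length):
        templates = np.array([x[i:i + length] for i in range(n)])
        count = 0
        for i in range(n - 1):
            # Chebyshev distance from template i to all later templates.
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    B, A = count_pairs(m), count_pairs(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# Higher values indicate more irregular, less predictable airflow dynamics.
print(sample_entropy(np.sin(np.linspace(0, 20, 500))))  # low: regular signal
```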


Subject(s)
Auscultation , Machine Learning , Respiratory Sounds/diagnosis , Signal Processing, Computer-Assisted , COVID-19/physiopathology , Fourier Analysis , Humans , Principal Component Analysis
6.
J Acoust Soc Am ; 148(6): 3385, 2020 12.
Article in English | MEDLINE | ID: covidwho-991716

ABSTRACT

Forced expiratory (FE) noise is a powerful bioacoustic signal containing information on human lung biomechanics. FE noise comprises a broadband part and narrowband components, the forced expiratory wheezes (FEWs), and is produced by acoustic and hydrodynamic mechanisms. The origin of the most powerful mid-frequency FEWs (400-600 Hz) is associated with the 0th-3rd levels of the bronchial tree in the sense of Weibel [(2009). Swiss Med. Wkly. 139(27-28), 375-386], whereas high-frequency FEWs (above 600 Hz) are attributed to the 2nd-6th levels. A laboratory prototype of the apparatus was developed, comprising an electret microphone sensor with a stethoscope head, a laptop with an external sound card, and purpose-built software. Instead of measuring FEWs directly, the new method analyses FE time in the 200-2000 Hz range and evaluates band-pass durations and energies in 200-Hz bands. It is demonstrated experimentally that the developed FE acoustic parameters correspond to basic indices of lung function evaluated by spirometry and body plethysmography and may be even more sensitive to some respiratory deviations. According to preliminary experimental results, the developed technique may be considered a promising instrument for acoustic monitoring of human lung function in extreme conditions, including diving and space flight. The technique eliminates contact between the sensor and the human oral cavity, which is characteristic of spirometry and body plethysmography; this reduces the risk of respiratory cross-contamination, especially during outpatient and field examinations, and may be especially relevant in the context of the COVID-19 pandemic.
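A minimal sketch of the band-wise evaluation the abstract describes (energies in 200-Hz bands from 200 to 2000 Hz), assuming SciPy; the filter order, sampling rate, and synthetic test signal are assumptions, and band-pass durations would be derived analogously from each filtered signal's envelope:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_energies(x, fs, lo=200, hi=2000, width=200):
    """Energy of an FE recording in consecutive 200-Hz bands."""
    energies = {}
    for f in range(lo, hi, width):
        # 4th-order Butterworth band-pass, zero-phase filtered.
        sos = butter(4, [f, f + width], btype="bandpass", fs=fs, output="sos")
        y = sosfiltfilt(sos, x)
        energies[(f, f + width)] = float(np.sum(y ** 2))
    return energies

fs = 8000
x = np.random.randn(10 * fs)  # stand-in for a microphone FE recording
print(band_energies(x, fs))
```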


Subject(s)
Acoustics/instrumentation , COVID-19 , Exhalation/physiology , Respiratory Sounds/diagnosis , Humans , Noise , SARS-CoV-2
8.
Intern Med ; 59(24): 3213-3216, 2020 Dec 15.
Article in English | MEDLINE | ID: covidwho-902224

ABSTRACT

A 60-year-old woman was admitted to our hospital for coronavirus disease 2019 (COVID-19) pneumonia, with a chief complaint of persistent low-grade fever and dry cough for two weeks. Thoracic computed tomography demonstrated a crazy-paving pattern in the bilateral lower lobes. In a COVID-19 ward, we used a novel wireless stethoscope with a telemedicine system and successfully recorded and shared the lung sounds in real time between the red and green zones. The fine crackles at the posterior right lower lung fields changed from mid-to-late inspiratory crackles (day 1) to late inspiratory crackles (day 3) and disappeared by day 5, along with improvement in both the clinical symptoms and the thoracic CT findings.


Subject(s)
Auscultation/instrumentation , COVID-19/diagnosis , Respiratory Sounds/diagnosis , SARS-CoV-2 , Stethoscopes , Telemedicine/methods , COVID-19/epidemiology , Equipment Design , Female , Humans , Middle Aged , Tomography, X-Ray Computed/methods
9.
Sensors (Basel) ; 20(18)2020 Sep 08.
Article in English | MEDLINE | ID: covidwho-760951

ABSTRACT

Lung sounds acquired by stethoscopes are extensively used in diagnosing and differentiating respiratory diseases. Although extensive know-how has been built up for interpreting these sounds and identifying diseases associated with certain patterns, its effective use is limited by the individual experience of practitioners. This user dependency impedes the digital transformation of a valuable diagnostic tool that could improve patient outcomes through continuous long-term respiratory monitoring under real-life conditions. Patients suffering from progressive respiratory diseases, such as chronic obstructive pulmonary disease, are expected to benefit particularly from long-term monitoring. The COVID-19 pandemic has also exposed the lack of respiratory monitoring systems that are ready to deploy in operational conditions while requiring minimal patient education. To address the latter point in particular, we present in this article a sound acquisition module that can be integrated into a dedicated garment, thus minimizing the patient's role in positioning the stethoscope and applying appropriate pressure. We implemented a diaphragm-less acousto-electric transducer by stacking a silicone rubber layer and a piezoelectric film to capture thoracic sounds with minimal attenuation. Furthermore, we benchmarked our device against an electronic stethoscope widely used in clinical practice to quantify its performance.
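One simple way to benchmark such a transducer against a reference electronic stethoscope is magnitude-squared coherence between simultaneously recorded channels; the abstract does not state the benchmark metric, so this SciPy sketch (with synthetic stand-in signals and an assumed 100-1000 Hz lung-sound band) is purely illustrative:

```python
import numpy as np
from scipy.signal import coherence

fs = 4000
rng = np.random.default_rng(0)
# Synthetic stand-ins: a shared thoracic sound plus independent sensor noise.
common = rng.standard_normal(60 * fs)
prototype = common + 0.1 * rng.standard_normal(60 * fs)  # garment module
reference = common + 0.1 * rng.standard_normal(60 * fs)  # clinical stethoscope

f, Cxy = coherence(prototype, reference, fs=fs, nperseg=1024)
band = (f >= 100) & (f <= 1000)  # assumed lung-sound band of interest
print(f"mean coherence 100-1000 Hz: {Cxy[band].mean():.2f}")  # near 1.0 here
```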


Subject(s)
Betacoronavirus , Clinical Laboratory Techniques/instrumentation , Coronavirus Infections/diagnosis , Coronavirus Infections/physiopathology , Monitoring, Ambulatory/instrumentation , Pneumonia, Viral/diagnosis , Pneumonia, Viral/physiopathology , Respiratory Sounds/diagnosis , Respiratory Sounds/physiopathology , Stethoscopes , Wearable Electronic Devices , Acoustics , Auscultation/instrumentation , COVID-19 , COVID-19 Testing , Electric Impedance , Equipment Design , Humans , Pandemics , Remote Sensing Technology/instrumentation , SARS-CoV-2 , Signal Processing, Computer-Assisted , Transducers , Wireless Technology/instrumentation